
Search in the Catalogues and Directories

Hits 1 – 20 of 163

1. Probing for the Usage of Grammatical Number ...
2. Estimating the Entropy of Linguistic Distributions ...
3. A Latent-Variable Model for Intrinsic Probing ...
4. On Homophony and Rényi Entropy ...
5. On Homophony and Rényi Entropy ...
6. On Homophony and Rényi Entropy ...
7. Towards Zero-shot Language Modeling ...
8. Differentiable Generative Phonology ...
9. Finding Concept-specific Biases in Form–Meaning Associations ...
10. Searching for Search Errors in Neural Morphological Inflection ...
11. Applying the Transformer to Character-level Transduction ...
Wu, Shijie; Cotterell, Ryan; Hulden, Mans. - : ETH Zurich, 2021
12. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ...
13. Probing as Quantifying Inductive Bias ...
14. Revisiting the Uniform Information Density Hypothesis ...
15. Revisiting the Uniform Information Density Hypothesis ...
16. Conditional Poisson Stochastic Beams ...
17. Examining the Inductive Bias of Neural Language Models with Artificial Languages ...
18. Modeling the Unigram Distribution ...
Read paper: https://www.aclanthology.org/2021.findings-acl.326
Abstract: The unigram distribution is the non-contextual probability of finding a specific word form in a corpus. While of central importance to the study of language, it is commonly approximated by each word's sample frequency in the corpus. This approach is highly dependent on sample size and assigns zero probability to any out-of-vocabulary (oov) word form. As a result, it produces negatively biased probabilities for oov word forms and positively biased probabilities for in-corpus words. In this work, we argue in favor of properly modeling the unigram distribution, claiming it should be a central task in natural language processing. With this in mind, we present a novel model for estimating it in a language (a neuralization of Goldwater et al.'s (2011) model) and show it produces much better estimates across a diverse set of 7 languages than the naïve use of neural character-level language models. ...
URL: https://dx.doi.org/10.48448/fx5z-4a29
https://underline.io/lecture/26417-modeling-the-unigram-distribution
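A minimal sketch of the estimation problem the abstract describes: approximating the unigram distribution by raw sample frequency (maximum likelihood) assigns probability zero to any word form not seen in the corpus. The corpus, function names, and example words below are illustrative assumptions, not taken from the paper's code or model.

```python
# Illustrative sketch (not the paper's model): sample-frequency (MLE)
# estimation of the unigram distribution and its oov failure mode.
from collections import Counter

# Toy corpus; any real corpus would be a much larger token list.
corpus = "the cat sat on the mat the cat slept".split()

counts = Counter(corpus)
total = sum(counts.values())  # 9 tokens in this toy corpus

def mle_unigram(word: str) -> float:
    # Sample-frequency estimate: count(word) / corpus size.
    # Counter returns 0 for unseen keys, so every out-of-vocabulary
    # word form gets probability exactly zero.
    return counts[word] / total

print(mle_unigram("the"))  # 3/9 ~ 0.333, positively biased in-corpus estimate
print(mle_unigram("dog"))  # 0.0 -- oov form, negatively biased estimate
```

The paper's point is that a proper model (e.g., its neuralization of Goldwater et al.'s model, or a character-level language model as a baseline) can assign non-zero probability to such unseen forms instead.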
19. Language Model Evaluation Beyond Perplexity ...
20. Differentiable Subset Pruning of Transformer Heads ...


Hits by source type:
Catalogues: 1
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 162